Health Law

Gendered Divides in Online Discussions about Reproductive Rights

Rao, Ashwin, Wang, Sze Yuh Nina, Lerman, Kristina

arXiv.org Artificial Intelligence

The U.S. Supreme Court's 2022 ruling in Dobbs v. Jackson Women's Health Organization marked a turning point in the national debate over reproductive rights. While the ideological divide over abortion is well documented, less is known about how gender and local sociopolitical contexts interact to shape public discourse. Drawing on nearly 10 million abortion-related posts on X (formerly Twitter) from users with inferred gender, ideology and location, we show that gender significantly moderates abortion attitudes and emotional expression, particularly in conservative regions, and independently of ideology. This creates a gender gap in abortion attitudes that grows more pronounced in conservative regions. The leak of the Dobbs draft opinion further intensified online engagement, disproportionately mobilizing pro-abortion women in areas where access was under threat. These findings reveal that abortion discourse is not only ideologically polarized but also deeply structured by gender and place, highlighting the central role of identity in shaping political expression during moments of institutional disruption.

Long a flashpoint in cultural and political battles, abortion debates have come to symbolize broader struggles over bodily autonomy, religious freedom, and gender equality. The 2022 Supreme Court ruling in Dobbs v. Jackson Women's Health Organization, which overturned nearly five decades of federal protections for abortion access established by Roe v. Wade, marked a seismic shift. It not only intensified existing partisan divides (1, 2), but also reshaped the legal and political terrain, triggering abrupt policy reversals in many states and catalyzing a realignment in the national debate over reproductive rights. A growing body of research has documented partisan cleavages in public attitudes toward reproductive rights (1, 3-7).
However, less attention has been paid to how gender and the sociopolitical environment jointly shape both opinion formation and patterns of public expression. Recent surveys point to a widening gender gap in political orientation, particularly among younger voters. For example, in the 2024 U.S. presidential election, white men predominantly supported President Trump, while white women preferred Vice President Harris (8). Similarly, Gallup polling found a sharp increase in the share of young women identifying as politically liberal and supporting reproductive rights (9). While women consistently report higher support for abortion access, particularly in countries with less restrictive policy environments (10, 11), men, even those who identify as pro-choice, often show less engagement with the issue (11-13). Prior work has also documented gendered modes of engagement in online discourse around reproductive rights (1, 2).


Uhura: A Benchmark for Evaluating Scientific Question Answering and Truthfulness in Low-Resource African Languages

Bayes, Edward, Azime, Israel Abebe, Alabi, Jesujoba O., Kgomo, Jonas, Eloundou, Tyna, Proehl, Elizabeth, Chen, Kai, Khadir, Imaan, Etori, Naome A., Muhammad, Shamsuddeen Hassan, Mpanza, Choice, Thete, Igneciah Pocia, Klakow, Dietrich, Adelani, David Ifeoluwa

arXiv.org Artificial Intelligence

Evaluations of Large Language Models (LLMs) on knowledge-intensive tasks and factual accuracy often focus on high-resource languages, primarily because datasets for low-resource languages (LRLs) are scarce. In this paper, we present Uhura -- a new benchmark that focuses on two tasks in six typologically diverse African languages, created via human translation of existing English benchmarks. The first dataset, Uhura-ARC-Easy, is composed of multiple-choice science questions. The second, Uhura-TruthfulQA, is a safety benchmark testing the truthfulness of models on topics including health, law, finance, and politics. We highlight the challenges of creating benchmarks with highly technical content for LRLs and outline mitigation strategies. Our evaluation reveals a significant performance gap between proprietary models, such as GPT-4o, o1-preview, and the Claude models, and open-source models such as Meta's LLaMA and Google's Gemma. Additionally, all models perform better in English than in African languages. These results indicate that models struggle with answering scientific questions and are more prone to generating false claims in low-resource African languages. Our findings underscore the necessity for continuous improvement of multilingual LM capabilities in LRL settings to ensure safe and reliable use in real-world contexts. We open-source the Uhura Benchmark and Uhura Platform to foster further research and development in NLP for LRLs.


Values in AI: bioethics and the intentions of machines and people - AI and Ethics

#artificialintelligence

Artificial intelligence has the potential to impose the values of its creators on its users, those affected by it, and society. Users may also intend to use a technological device in an illicit or unexpected way. Devices change people's intentions as people are empowered by technology. What people mean to do with the help of technology reflects their choices, preferences, and values. Technology is a disruptor that impacts society as a whole.


After Roe v. Wade cat email gaffe, Sony and Insomniac plan donations

Washington Post - Technology News

Following that gaffe, Insomniac, the Sony subsidiary behind "Ratchet & Clank" and "Marvel's Spider-Man," plans to donate $50,000 to the Women's Reproductive Rights Assistance Project (WRRAP), according to an internal email from Insomniac CEO Ted Price, sent May 13 and viewed by The Washington Post. Sony will match the donation, along with donations from individual Insomniac employees if they make them via the company's "PlayStation Cares" program. In addition, Sony now plans to formulate an initiative to provide financial assistance to employees who might have to travel to different states to receive reproductive care. Insomniac will aid in formulating that policy.


AI, HEALTH CARE AND LAW: PART 1

#artificialintelligence

International law and health-related international standard-setting instruments play an important role in the evolution and development of international health law. Conventional international law is the primary legal instrument through which international organisations can extend cooperation to improve global health status and reduce the global burden of disease. In recent times, there has been an increase in intergovernmental organisations in the domain of health care. Consider the growing diversity of international law relating to public health, in which a broad array of intergovernmental organisations, including the United Nations, its agencies, and other related bodies, contribute to the development of international health law. International health law is therefore emerging in a fragmented and amorphous manner.


Health Care Analytics

#artificialintelligence

Health care analytics solutions from SAS provide insights that drive value-based health care. With solutions for better population health, more efficient health care operations, and better detection and prevention of health care fraud, waste and abuse, SAS accelerates your time to value.


UNESCO Chair in Bioethics and Human Rights

#artificialintelligence

We are proud to share some words from the letter received from Borhene Chakroun, Director of UNESCO's Education Sector Division for Policies and Lifelong Learning Systems. "In light of the very good results achieved by the above-mentioned Chair, confirmed by the...


Twitter Sentiment on Affordable Care Act using Score Embedding

Farhadloo, Mohsen

arXiv.org Machine Learning

Mohsen Farhadloo, PhD, John Molson School of Business, Concordia University (mohsen.farhadloo@concordia.ca), August 21, 2019.

Abstract: In this paper we introduce score embedding, a neural-network-based model to learn interpretable vector representations for words. Score embedding is a supervised method that takes advantage of labeled training data and the neural network architecture to learn interpretable representations for words. Health care has been a controversial issue between political parties in the United States. In this paper we use discussions on Twitter regarding different issues of the Affordable Care Act to identify public opinion about the existing health care plans using the proposed score embedding. Our results indicate that our approach effectively incorporates sentiment information, outperforms or is at least comparable to state-of-the-art methods, and that the negative sentiment towards "TrumpCare" was consistently greater than neutral and positive sentiment over time.

Introduction: Sentiment analysis, as a type of text categorization, is the task of identifying the sentiment orientation of documents written in natural language, assigning one of the predefined sentiment categories to a whole document or to pieces of it, such as phrases or sentences [23, 8]. Many studies used binary classification and reported high performance [18, 29, 24], and some have observed that categorization performance drops as the number of sentiment categories increases [2, 16, 3, 11]. Bag-Of-Words (BOW), a standard approach for text categorization, represents a document by a vector that indicates the words that appear in the document.
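The bag-of-words representation mentioned above can be sketched in a few lines. This is a minimal illustration of the general technique, not the paper's score-embedding model; the vocabulary and example sentence are invented:

```python
from collections import Counter

def bag_of_words(doc, vocabulary):
    """Represent a document as a vector of word counts over a fixed vocabulary.

    Words outside the vocabulary are ignored; word order is discarded.
    """
    counts = Counter(doc.lower().split())
    return [counts[word] for word in vocabulary]

# Hypothetical vocabulary and document for illustration.
vocabulary = ["health", "care", "plan", "tax"]
vec = bag_of_words("Health care plan beats health tax", vocabulary)
# Each position counts occurrences of the corresponding vocabulary word.
```

Unlike such count vectors, the score embedding proposed in the paper learns its representations from labeled data, so the resulting word vectors carry sentiment information rather than raw frequencies.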


Synthetic learner: model-free inference on treatments over time

Viviano, Davide, Bradic, Jelena

arXiv.org Machine Learning

Understanding of the effect of a particular treatment or a policy pertains to many areas of interest -- ranging from political economics, marketing to health-care and personalized treatment studies. In this paper, we develop a non-parametric, model-free test for detecting the effects of treatment over time that extends widely used Synthetic Control tests. The test is built on counterfactual predictions arising from many learning algorithms. In the Neyman-Rubin potential outcome framework with possible carry-over effects, we show that the proposed test is asymptotically consistent for stationary, beta mixing processes. We do not assume that class of learners captures the correct model necessarily. We also discuss estimates of the average treatment effect, and we provide regret bounds on the predictive performance. To the best of our knowledge, this is the first set of results that allow for example any Random Forest to be useful for provably valid statistical inference in the Synthetic Control setting. In experiments, we show that our Synthetic Learner is substantially more powerful than classical methods based on Synthetic Control or Difference-in-Differences, especially in the presence of non-linear outcome models.
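The counterfactual-prediction idea behind such tests can be sketched in a toy form: predict the treated unit's outcome series from control units, then compare prediction errors before and after treatment. This is not the authors' Synthetic Learner; the simple control-averaging "learner", the data, and the gap statistic below are all invented for illustration:

```python
def counterfactual_gap(treated, controls, t0):
    """Compare prediction error before and after treatment time t0.

    The counterfactual for the treated unit is predicted as the average of
    the control series (a stand-in for an arbitrary learning algorithm).
    A large positive gap suggests a treatment effect.
    """
    predicted = [sum(vals) / len(vals) for vals in zip(*controls)]
    errors = [abs(y - yhat) for y, yhat in zip(treated, predicted)]
    pre = sum(errors[:t0]) / t0
    post = sum(errors[t0:]) / (len(errors) - t0)
    return post - pre

# Toy data: the treated unit jumps after period 3, while controls stay flat.
controls = [[1.0, 1.1, 0.9, 1.0, 1.0, 1.1],
            [0.9, 1.0, 1.1, 1.0, 0.9, 1.0]]
treated = [1.0, 1.0, 1.0, 2.0, 2.1, 1.9]
gap = counterfactual_gap(treated, controls, t0=3)
```

In the paper's setting, the averaging step would be replaced by a flexible learner (e.g. a random forest) fit on pre-treatment data, and significance would be assessed formally rather than by eyeballing the gap.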


U of T's leading bioethics centre to design ethical artificial intelligence for health

#artificialintelligence

Partnership between Joint Centre for Bioethics and AMS Healthcare to shape the future of artificial intelligence in Canada's health system. A new partnership with AMS Healthcare is supporting the University of Toronto Joint Centre for Bioethics (JCB) in accelerating knowledge and informing practice on ethical artificial intelligence (AI) in health care. "We are thrilled to partner with AMS Healthcare in exploring how AI may be a force for good to improve health and health care, particularly from the perspective of patients and providers," said Jennifer Gibson, JCB Director, based at the Dalla Lana School of Public Health. The gift is supporting JCB's AI and the Future of Caring initiative, one of four priority themes in the JCB's Ethics and AI for Good Health strategy. The other three priority areas are: Public Trust of AI for Health; Ethical Governance of AI for Health; and Equity and the Digital Divide. AI and related digital health technologies hold promise for promoting healthy behaviours, enabling prevention, diagnosis, and treatment of disease, and addressing health equity gaps in health policy and planning. However, there are important ethical questions about what impact AI-enabled health care and related technologies will have on patient-provider relationships and on public trust.